feat: add SGD optimizer for neural networks #13671
                
Describe your change:
This PR adds the Stochastic Gradient Descent (SGD) optimizer under `machine_learning/neural_network/optimizers/sgd.py`. SGD is a fundamental optimizer used for training neural networks and deep learning models.

Key features included in this PR:
- `test_sgd.py` to validate functionality

This PR is the first step in adding a sequence of neural network optimizers: Momentum SGD, Nesterov Accelerated Gradient (NAG), Adagrad, Adam, and Muon. This implementation provides a reference-quality example and lays the foundation for future contributions; a sketch of the update rule is shown below.
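For reviewers, here is a minimal sketch of the vanilla SGD update rule, `w ← w − lr · ∇L(w)`. The class and method names below (`SGD`, `update`) and the NumPy-based interface are illustrative assumptions for this description, not necessarily the exact API in `sgd.py`:

```python
import numpy as np


class SGD:
    """Vanilla stochastic gradient descent: w <- w - learning_rate * gradient."""

    def __init__(self, learning_rate: float = 0.01) -> None:
        if learning_rate <= 0:
            raise ValueError("learning_rate must be positive")
        self.learning_rate = learning_rate

    def update(self, weights: np.ndarray, gradients: np.ndarray) -> np.ndarray:
        """Return the weights after one SGD step in the negative gradient direction."""
        return weights - self.learning_rate * gradients


# Example: one step on f(w) = w^2 elementwise, whose gradient is 2w.
optimizer = SGD(learning_rate=0.1)
w = np.array([1.0, -2.0])
w = optimizer.update(w, 2 * w)  # -> [0.8, -1.6], moving toward the minimum at 0
```

The same `update` signature extends naturally to the planned follow-up optimizers (Momentum SGD, NAG, Adagrad, Adam), which add per-parameter state on top of this base rule.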
Checklist:
Fixes #13662